
A Distributed Emulation Environment for In-Memory Computing Systems

Bougioukou, Eleni, Petropoulos, Anastasios, Toulgaridis, Nikolaos, Chatzimichail, Theodoros, Antonakopoulos, Theodore

arXiv.org Artificial Intelligence

Abstract--In-memory computing technology is used extensively in artificial intelligence devices due to its lower power consumption and fast calculation of matrix-based functions. The development of such a device and its integration into a system take a significant amount of time and require the use of a real-time emulation environment, where various system aspects are analyzed, microcode is tested, and applications are deployed, even before the real chip is available. In this work, we present the architecture, the software development tools, and experimental results of a distributed and expandable emulation system for rapid prototyping of integrated circuits based on in-memory computing technologies. The presented experimental results demonstrate the usefulness of the proposed emulator.

Edge computing is a key technology for the successful deployment of the Internet of Things (IoT): by providing decentralized processing, it offers high processing rates with low latency, resulting in efficient bandwidth usage and improved reliability due to its fault-tolerant approach [1]. One of the most promising technologies used in edge computing devices is In-Memory Computing (IMC), which utilizes volatile or non-volatile memory (NVM) cells to store the weights of matrix-based functions and to perform in-situ computations with reduced latency and lower power consumption. Digital IMCs (DIMCs) are based on SRAMs or DRAMs, while analog IMC (AIMC) modules are based on non-volatile memories, such as PCM [2].
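The core IMC operation described in the abstract — programming weights into memory cells once, then computing matrix-vector products in place — can be illustrated with a minimal emulation sketch. The class name, noise model, and values below are illustrative assumptions, not the paper's actual emulator; NumPy stands in for the analog crossbar:

```python
import numpy as np

class CrossbarEmulator:
    """Minimal sketch of an AIMC crossbar: weights are 'programmed'
    into the array once, then matrix-vector products are computed
    in place, optionally with read noise to mimic analog PCM cells."""

    def __init__(self, weights, noise_std=0.0):
        self.weights = np.asarray(weights, dtype=float)  # programmed conductances
        self.noise_std = noise_std

    def matvec(self, x):
        # In an analog array, each read is slightly perturbed; model
        # this as additive Gaussian noise on the stored weights.
        w = self.weights
        if self.noise_std > 0.0:
            w = w + np.random.normal(0.0, self.noise_std, w.shape)
        return w @ x

xbar = CrossbarEmulator([[1.0, 2.0], [3.0, 4.0]])
result = xbar.matvec([1.0, 1.0])  # ideal (noise-free) result: [3. 7.]
```

A software emulator of this kind lets microcode and applications exercise the matrix path (including noise behavior) long before silicon exists, which is the role the real-time emulation environment plays in the paper.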


Live AI deepfake video makes 'pig butchering' scams more convincing

PCWorld

"Pig butchering" is an unsavory term for a very specific kind of phishing attack, wherein the scammer targets a wealthy individual with the lure of romance and then takes them for all they're worth. It's hardly a new idea -- they used to call this kind of thing "fleecing" -- but new software tools are making it a lot easier and more effective. A ring of scammers in Hong Kong managed to use live "deepfake" video to steal millions from their victims. Police in Hong Kong arrested 27 people who operated out of an office and conspired to rip off wealthy people by pretending to be attractive romance prospects and getting them to invest big bucks in phony cryptocurrency schemes. The methodology is familiar: Set yourself up as an attractive stranger with an alluring profile photo, slowly build up a rapport with the victim through text messages, and casually hint at the prospect of vast profits with a new crypto platform.


We Need to Control AI Agents Now

The Atlantic - Technology

In 2010--well before the rise of ChatGPT and Claude and all the other sprightly, conversational AI models--an army of bots briefly wiped out $1 trillion of value across the NASDAQ and other stock exchanges. Lengthy investigations were undertaken to figure out what had happened and why--and how to prevent it from happening again. The Securities and Exchange Commission's report on the matter blamed high-frequency-trading algorithms unexpectedly engaging in a mindless "hot potato" buying and selling of contracts back and forth to one another. A "flash crash," as the incident was called, may seem quaint relative to what lies ahead. That's because, even amid all the AI hype, a looming part of the AI revolution is under-examined: "agents." Agents are AIs that act independently on behalf of humans.


The Future of Scientific Publishing: Automated Article Generation

Harper, Jeremy R.

arXiv.org Artificial Intelligence

This study introduces a novel software tool leveraging large language model (LLM) prompts, designed to automate the generation of academic articles from Python code, a significant advancement in the fields of biomedical informatics and computer science. Selected for its widespread adoption and analytical versatility, Python served as a foundational proof of concept; however, the underlying methodology and framework are adaptable to a wide range of GitHub repositories, underlining the tool's broad applicability (Harper 2024). By mitigating the traditionally time-intensive academic writing process, particularly the synthesis of complex datasets and coding outputs, this approach signifies a monumental leap towards streamlining research dissemination. The development was achieved without reliance on advanced language model agents, ensuring high fidelity in the automated generation of coherent and comprehensive academic content. This exploration not only validates the successful application and efficiency of the software but also projects how future integration of LLM agents could amplify its capabilities, propelling the field towards a future where scientific findings are disseminated more swiftly and accessibly.
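The prompt-driven workflow the abstract describes can be sketched as a template that pairs source code with a section request. The template wording and function name here are illustrative assumptions; the paper's actual prompts are not reproduced in this digest:

```python
def build_article_prompt(code: str, section: str) -> str:
    """Assemble an LLM prompt asking for one article section that
    describes the supplied Python code (hypothetical template)."""
    return (
        "You are an academic writing assistant.\n"
        f"Write the '{section}' section of a research article that "
        "describes the following Python code:\n\n"
        + code
        + "\n\nUse formal academic prose and do not invent results."
    )

prompt = build_article_prompt("def double(x):\n    return x * 2", "Methods")
```

The assembled string would then be sent to an LLM completion endpoint; iterating over sections (Introduction, Methods, Results, and so on) yields a full draft from a single repository.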


EEG-GPT: Exploring Capabilities of Large Language Models for EEG Classification and Interpretation

Kim, Jonathan W., Alaa, Ahmed, Bernardo, Danilo

arXiv.org Artificial Intelligence

Large language models (LLMs) such as ChatGPT have garnered substantial attention in the media and among the machine learning (ML) community. LLMs represent a pivotal paradigm shift in artificial intelligence (AI), consisting of transformer architectures substantially larger in scale than their predecessors, such as Recurrent Neural Networks (RNNs) and Long Short-Term Memory (LSTM) networks [1], and leveraging internet-scale text corpora, thus excelling not only on text completion tasks but also demonstrating emergent capabilities in rudimentary language reasoning [2, 3]. LLMs display several features conducive to the small data regime present in most EEG datasets, where the largest datasets typically have on the order of only thousands of EEGs. Primarily, LLMs have the capability to perform few- and even zero-shot learning [4]. Recent research has investigated how LLMs can perform few-shot learning in domains ranging from cancer drug synergy prediction to cardiac signal analysis [5, 6]. Other work has demonstrated the ability of LLMs to outperform experts in annotating political Twitter messages with zero-shot learning [7]. Additionally, previous work has shown that transformer architectures are capable of utilizing in-context learning for zero-shot tasks - in other words, utilizing information provided in the prompt in order to yield better performance on various tasks [8].
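The in-context (few-shot) learning mechanism the abstract relies on amounts to placing labeled examples directly in the prompt, so the model conditions on them without any weight updates. A minimal sketch, with an illustrative report format and label set that are assumptions rather than the paper's actual setup:

```python
def few_shot_prompt(examples, query):
    """Build a few-shot classification prompt: each (text, label)
    pair becomes an in-context example; the query is left unlabeled
    for the LLM to complete."""
    lines = ["Classify each EEG report as NORMAL or ABNORMAL."]
    for text, label in examples:
        lines.append(f"Report: {text}\nLabel: {label}")
    lines.append(f"Report: {query}\nLabel:")
    return "\n".join(lines)

prompt = few_shot_prompt(
    [("Posterior dominant rhythm at 10 Hz, no epileptiform activity.", "NORMAL")],
    "Frequent left temporal sharp waves.",
)
```

With zero examples the same template becomes a zero-shot prompt, which is what makes the approach attractive when only thousands of labeled EEGs exist.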


Causally Linking Health Application Data and Personal Information Management Tools

Luz, Saturnino, Masoodian, Masood

arXiv.org Artificial Intelligence

The proliferation of consumer health devices such as smart watches, sleep monitors, and smart scales in many countries has not only led to growing interest in health monitoring, but also to the development of countless "smart" applications to support the exploration of such data by members of the general public, sometimes with integration into professional health services. While a variety of health data streams has been made available by such devices to users, these streams are often presented as separate time-series visualizations, in which the potential relationships between health variables are not explicitly made visible. Furthermore, despite the fact that other aspects of life, such as work and social connectivity, have become increasingly digitised, health and well-being applications make little use of the potentially useful contextual information provided by widely used personal information management tools, such as shared calendar and email systems. This paper presents a framework for the integration of these diverse data sources, analytic and visualization tools, with inference methods and graphical user interfaces to help users by highlighting causal connections among such time-series.


Validation of a Zero-Shot Learning Natural Language Processing Tool for Data Abstraction from Unstructured Healthcare Data

Kaufmann, Basil, Busby, Dallin, Das, Chandan Krushna, Tillu, Neeraja, Menon, Mani, Tewari, Ashutosh K., Gorin, Michael A.

arXiv.org Artificial Intelligence

Objectives: To describe the development and validation of a zero-shot learning natural language processing (NLP) tool for abstracting data from unstructured text contained within PDF documents, such as those found within electronic health records. Materials and Methods: A data abstraction tool based on the GPT-3.5 model from OpenAI was developed and compared to three physician human abstractors in terms of time to task completion and accuracy for abstracting data on 14 unique variables from a set of 199 de-identified radical prostatectomy pathology reports. The reports were processed by the software tool in vectorized and scanned formats to establish the impact of optical character recognition on data abstraction. The tool was assessed for superiority for data abstraction speed and non-inferiority for accuracy. Results: The human abstractors required a mean of 101 s per report for data abstraction, with times varying from 15 to 284 s. In comparison, the software tool required a mean of 12.8 s to process the vectorized reports and a mean of 15.8 s to process the scanned reports (P < 0.001). The overall accuracies of the three human abstractors were 94.7%, 97.8%, and 96.4% for the combined set of 2786 datapoints. The software tool had an overall accuracy of 94.2% for the vectorized reports, proving to be non-inferior to the human abstractors at a margin of -10% ($\alpha$=0.025). The tool had a slightly lower accuracy of 88.7% on the scanned reports, proving to be non-inferior to two of the three human abstractors. Conclusion: The developed zero-shot learning NLP tool affords researchers levels of accuracy comparable to those of human abstractors, with significant time savings. Because no task-specific model training is required, the tool is highly generalizable and can be used for a wide variety of data abstraction tasks, even outside the field of medicine.
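A zero-shot abstraction pipeline of this general shape has two parts: a prompt that names the target variables and a schema for the model's structured reply. The variable names and prompt wording below are illustrative assumptions, not the study's actual 14 variables or prompts:

```python
import json

# Illustrative subset of target variables (not the study's actual list)
VARIABLES = ["gleason_score", "margin_status", "tumor_stage"]

def build_extraction_prompt(report_text: str) -> str:
    """Zero-shot prompt: no task-specific examples, only an
    instruction and the desired JSON output schema."""
    fields = ", ".join(VARIABLES)
    return (
        "Extract the following variables from the pathology report "
        f"below and answer with a JSON object with the keys: {fields}. "
        "Use null for any variable not present in the report.\n\n"
        "REPORT:\n" + report_text
    )

def parse_model_reply(reply: str) -> dict:
    """Parse the model's JSON answer, tolerating missing keys."""
    data = json.loads(reply)
    return {key: data.get(key) for key in VARIABLES}

row = parse_model_reply('{"gleason_score": "3+4", "margin_status": "negative"}')
```

Because nothing in the prompt is specific to prostatectomy reports beyond the variable list, swapping in a different variable list retargets the same pipeline to an entirely different abstraction task, which is the generalizability the conclusion highlights.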


Adept, a startup training AI to use existing software and APIs, raises $350M

#artificialintelligence

In another sign that the current VC appetite for AI is insatiable, Adept, a startup building AI that "enables humans and computers to work together creatively to solve problems," yesterday announced that it raised $350 million in a Series B funding round co-led by General Catalyst and Spark Capital with participation from Addition, Greylock, Atlassian Ventures, Microsoft, Nvidia, Workday Ventures, Caterina Fake, Frontiers Capital, PSP Growth, SV Angel and A.Capital. Forbes reports that the valuation was "at least" $1 billion. The cash injection brings Adept's total raised to $415 million, which co-founder and CEO David Luan says is being put toward productization, model training and headcount growth. "Giant foundation models for language and for images have shown astounding capabilities in the last few years. Adept is building on this momentum via a new kind of foundation model that can perform actions on any software tool using natural language," he said in a press release.


Understanding URDF: A Survey Based on User Experience

Tola, Daniella, Corke, Peter

arXiv.org Artificial Intelligence

With the increasing complexity of robot systems, it is necessary to simulate them before deployment. To do this, a model of the robot's kinematics or dynamics is required. One of the most commonly used formats for modeling robots is the Unified Robot Description Format (URDF). The goal of this article is to understand how URDF is currently used, what challenges people face when working with it, and how the community sees the future of URDF. The outcome can potentially be used to guide future research. This article presents the results of a survey based on 510 anonymous responses from robotic developers of different backgrounds and levels of experience. We find that 96.8% of the participants have simulated robots before, and of them 95.5% had used URDF. We identify a number of challenges and limitations that complicate the use of URDF, such as the inability to model parallel linkages and closed-chain systems, the absence of a true standard, the lack of documentation, and the limited set of dynamic parameters available to model the robot. Future perspectives for URDF are also determined: 53.5% believe URDF will be more commonly used in the future, 12.2% believe other standards or tools will make URDF obsolete, and 34.4% are not sure what the future of URDF will be. Most participants agree that there is a need for better tooling to ensure URDF's future use.
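For readers unfamiliar with the format, a minimal URDF description looks like the following sketch of a one-joint arm (link names, axis, and limits are illustrative):

```xml
<?xml version="1.0"?>
<robot name="two_link_arm">
  <link name="base_link"/>
  <link name="upper_arm"/>
  <joint name="shoulder" type="revolute">
    <parent link="base_link"/>
    <child link="upper_arm"/>
    <axis xyz="0 0 1"/>
    <limit lower="-1.57" upper="1.57" effort="10.0" velocity="1.0"/>
  </joint>
</robot>
```

Each joint connects exactly one parent link to one child link, so a URDF model is always a kinematic tree; this is the structural reason the survey participants cite for the format's inability to express parallel linkages and closed-chain systems.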


User Study for Improving Tools for Bible Translation

Mathew, Joel, Hermjakob, Ulf

arXiv.org Artificial Intelligence

Technology has increasingly become an integral part of the Bible translation process. Over time, both the translation process and the relevant technology have evolved greatly. More recently, the field of Natural Language Processing (NLP) has made great progress in solving some problems previously thought intractable. Through this study we endeavor to better understand and communicate about a segment of the current landscape of the Bible translation process as it relates to technology, and to identify pertinent issues. We conduct several interviews with individuals working at different levels of the Bible translation process, from multiple organizations, to identify gaps and bottlenecks where technology (including recent advances in AI) could potentially play a pivotal role in reducing translation time and improving overall quality.